
Conversation


@juanmleng juanmleng commented Jan 10, 2025

Internal Notes for Reviewers

Add two new notebooks for the scorecard model:

  • application_scorecard_ongoing_monitoring.ipynb: adds new data and model drift metrics and populates the ongoing monitoring document for the scorecard model
  • application_scorecard_executive.ipynb: a high-level notebook that documents the scorecard model with a single command, lending_club.document_model()

External Release Notes

Add two new notebooks for the scorecard model:

  • application_scorecard_ongoing_monitoring.ipynb: adds new data and model drift metrics and populates the ongoing monitoring document for the scorecard model
  • application_scorecard_executive.ipynb: a high-level notebook that documents the scorecard model with a single command, lending_club.document_model()

@juanmleng juanmleng added the internal label (Not to be externalized in the release notes) Jan 10, 2025
@juanmleng juanmleng self-assigned this Jan 10, 2025

@cachafla cachafla left a comment


Nice! Don't forget to bump the version before merging.

@github-actions

PR Summary

This pull request introduces several enhancements and bug fixes to the ValidMind Library, particularly focusing on credit risk scorecard models and ongoing monitoring capabilities. Key changes include:

  1. New Notebooks: Added new Jupyter notebooks for application scorecard models and ongoing monitoring, providing step-by-step guides for using the ValidMind Library with credit risk datasets.

    • application_scorecard_executive.ipynb: Demonstrates building and documenting an application scorecard model.
    • application_scorecard_full_suite.ipynb: Provides a comprehensive suite for testing application scorecards.
    • application_scorecard_with_ml.ipynb: Integrates machine learning models into the scorecard process.
    • application_scorecard_ongoing_monitoring.ipynb: Focuses on ongoing monitoring of application scorecards.
  2. Custom Tests: Introduced custom tests for scorecard models, allowing users to define and run their own tests using the ValidMind Library.

    • ScoreBandDiscriminationMetrics.py: Evaluates discrimination metrics across different score bands.
  3. Ongoing Monitoring Enhancements: Added new tests for ongoing monitoring of models, including:

    • CalibrationCurveDrift.py: Evaluates changes in probability calibration.
    • ClassDiscriminationDrift.py: Compares classification discrimination metrics.
    • ClassImbalanceDrift.py: Evaluates drift in class distribution.
    • ClassificationAccuracyDrift.py: Compares classification accuracy metrics.
    • ConfusionMatrixDrift.py: Compares confusion matrix metrics.
    • CumulativePredictionProbabilitiesDrift.py: Compares cumulative prediction probability distributions.
    • FeatureDrift.py: Evaluates changes in feature distribution.
    • PredictionAcrossEachFeature.py: Assesses prediction distributions across features.
    • PredictionCorrelation.py: Assesses correlation changes between predictions and features.
    • PredictionProbabilitiesHistogramDrift.py: Compares prediction probability distributions.
    • PredictionQuantilesAcrossFeatures.py: Assesses prediction distributions across features using quantiles.
    • ROCCurveDrift.py: Compares ROC curves.
    • ScoreBandsDrift.py: Analyzes drift in score bands.
    • ScorecardHistogramDrift.py: Compares score distributions.
    • TargetPredictionDistributionPlot.py: Assesses differences in prediction distributions.
  4. Dataset and Model Enhancements: Improved dataset loading, preprocessing, and feature engineering functions with verbosity control for cleaner output.

  5. Version Update: Updated the library version from 2.7.5 to 2.7.6.

  6. Integration Tests: Updated integration tests to skip certain ongoing monitoring tests that require specific conditions.
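The per-band discrimination idea behind ScoreBandDiscriminationMetrics (item 2) can be sketched generically: compute a rank-based AUC within each score band. The following is a minimal illustration in plain Python, using synthetic scores and arbitrary band edges; it is not the library's implementation.

```python
import random

def auc(labels, scores):
    """Rank-based (Mann-Whitney) AUC; assumes both classes are present."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

random.seed(2)
# Synthetic data: defaulters (y=1) tend to receive lower scores
data = []
for _ in range(600):
    y = int(random.random() < 0.3)
    score = random.gauss(520 if y else 620, 60)
    data.append((y, score))

bands = [(300, 550), (550, 650), (650, 900)]  # illustrative score bands
for lo, hi in bands:
    in_band = [(y, s) for y, s in data if lo <= s < hi]
    ys = [y for y, _ in in_band]
    if 0 < sum(ys) < len(ys):  # AUC needs both classes in the band
        band_auc = auc(ys, [s for _, s in in_band])
        print(f"band [{lo}, {hi}): n={len(in_band)}, AUC={band_auc:.3f}")
```

Within-band AUC is typically lower than the global AUC, since banding removes most of the score's ranking power; that is exactly why it is worth reporting per band.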

These changes enhance the ValidMind Library's capabilities for managing and monitoring credit risk models, giving users more flexibility and insight into their models' performance.
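The drift idea behind tests like FeatureDrift can be illustrated with a population stability index (PSI) between a reference and a monitoring sample. This is a generic sketch, not the library's implementation; the bin count and the 0.1/0.25 thresholds are conventional but arbitrary choices.

```python
import math
import random

def psi(reference, monitoring, bins=10):
    """Population Stability Index between two samples of a numeric feature.

    Bin edges are quantiles of the reference sample; PSI sums
    (actual% - expected%) * ln(actual% / expected%) over the bins.
    """
    ref_sorted = sorted(reference)
    # Quantile-based bin edges taken from the reference distribution
    edges = [ref_sorted[int(len(ref_sorted) * i / bins)] for i in range(1, bins)]

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            idx = sum(1 for e in edges if x > e)  # index of the bin containing x
            counts[idx] += 1
        # Small floor avoids log(0) for empty bins
        return [max(c / len(sample), 1e-6) for c in counts]

    expected = bin_fractions(reference)
    actual = bin_fractions(monitoring)
    return sum((a - e) * math.log(a / e) for a, e in zip(actual, expected))

random.seed(0)
ref = [random.gauss(600, 50) for _ in range(5000)]      # reference scores
stable = [random.gauss(600, 50) for _ in range(5000)]   # same distribution
shifted = [random.gauss(560, 50) for _ in range(5000)]  # mean shift: drift

print(f"PSI stable:  {psi(ref, stable):.3f}")   # well under 0.1 -> stable
print(f"PSI shifted: {psi(ref, shifted):.3f}")  # above 0.25 -> significant drift
```

The same statistic applies to scores, prediction probabilities, or any numeric feature, which is why PSI-style comparisons recur across drift tests.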

Test Suggestions

  • Run the new Jupyter notebooks to ensure they execute without errors and produce expected outputs.
  • Test the custom tests for scorecard models to verify they correctly evaluate discrimination metrics.
  • Validate the ongoing monitoring tests by simulating data drift scenarios and checking if the tests detect changes accurately.
  • Check the integration of new tests with the ValidMind Platform to ensure results are logged and displayed correctly.
  • Verify the verbosity control in dataset loading and preprocessing functions to ensure it suppresses or displays output as expected.
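One way to follow the drift-simulation suggestion above without the library is to perturb a monitoring sample and confirm that a simple detector fires. The sketch below uses a two-proportion z-test on the positive-class rate as a stand-in for (not the implementation of) a class-imbalance drift check; the 20%/30% default rates are arbitrary.

```python
import math
import random

def class_rate_z(ref_labels, mon_labels):
    """Two-proportion z statistic for the positive-class rate.

    |z| > 1.96 flags a shift at the 5% significance level. This is a
    generic check, not ValidMind's ClassImbalanceDrift test.
    """
    n1, n2 = len(ref_labels), len(mon_labels)
    p1, p2 = sum(ref_labels) / n1, sum(mon_labels) / n2
    pooled = (sum(ref_labels) + sum(mon_labels)) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

random.seed(1)
ref = [int(random.random() < 0.20) for _ in range(4000)]      # 20% default rate
drifted = [int(random.random() < 0.30) for _ in range(4000)]  # simulated shift to ~30%

z = class_rate_z(ref, drifted)
print(f"z = {z:.1f}, drift flagged: {abs(z) > 1.96}")
```

A test harness can run such a check twice, once on an unperturbed sample (expecting no flag) and once on the perturbed one (expecting a flag), mirroring the pass/fail semantics of the monitoring tests.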

@juanmleng juanmleng merged commit cfeb250 into main Jan 10, 2025
6 checks passed
@cachafla cachafla added the enhancement label (New feature or request) and removed the internal label (Not to be externalized in the release notes) Jan 28, 2025